
[AI] [FEAT] Account ID Creation #356

Open
twq110 wants to merge 1 commit into main from
20260402-#350-executions-계정-분리(account_id)-및-다중-데일리-파이프라인-대응


Conversation

@twq110 (Contributor) commented Apr 6, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Trading account management: a pipeline can target a specific trading account, so positions and trade executions are tracked per account.
    • Configuration: an account code field was added to the pipeline configuration.
  • Database

    • Added a table for storing account information, and account ID tracking on trade execution records.

@twq110 twq110 requested a review from discipline24 as a code owner April 6, 2026 07:04
@coderabbitai

coderabbitai bot commented Apr 6, 2026

Walkthrough

This PR adds account-based portfolio filtering and introduces a feature-schema comparison script for the Transformer model. It adds an account_names table and an account_id column on executions to the database, and propagates the account code through the configuration and repository layers.

Changes

Cohort / File(s): Summary

Transformer model comparison script
AI/backtests/compare_transformer_feature_schemas.py
Adds an executable script that trains and compares two feature schemas (dynamic23, fixed17). Includes CLI argument parsing, data loading via DataLoader, model training with callbacks, merging of validation-loss curves, and comparison-report generation.

Training analysis and configuration artifacts
AI/backtests/out/transformer_refactor_change_analysis_2026-04-03.md, AI/backtests/out/transformer_schema_compare_smoke/run_config.json
Adds a change-history analysis document for the Transformer refactor and a training-run configuration file.

Configuration layer: account code
AI/config/trading.default.json, AI/config/trading.py
Adds an account_code field to PipelineConfig with the default "chart_based_signal_model", including empty-string validation.

Database schema: account management
schema.sql
Adds a new public.account_names table (with a unique constraint on account_code), plus an account_id column on public.executions with a foreign-key constraint and an index.

Repository layer: account filtering
AI/libs/database/repository.py
Adds an account_code parameter to PortfolioRepository.__init__. A new _resolve_account_id() method looks up and caches the account ID. Adds account_id filtering to get_current_position, get_current_cash, get_open_tickers, save_executions_to_db, and several other methods.

Pipeline: account code propagation
AI/pipelines/daily_routine.py
run_daily_pipeline now passes trading_config.pipeline.account_code when constructing PortfolioRepository.
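The configuration-to-repository wiring summarized above can be sketched roughly as follows. The class bodies are hypothetical, simplified stand-ins for the real PipelineConfig and PortfolioRepository; only the account_code field, its default value, and the constructor parameter come from this PR.

```python
from dataclasses import dataclass

# Hypothetical, simplified stand-ins for the classes this PR touches.
@dataclass
class PipelineConfig:
    # New field from this PR, with its documented default value.
    account_code: str = "chart_based_signal_model"

    def __post_init__(self):
        # The PR mentions empty-string validation for account_code.
        if not self.account_code.strip():
            raise ValueError("account_code must not be an empty string")

class PortfolioRepository:
    def __init__(self, db_name, account_code=None):
        self.db_name = db_name
        # Normalize once at construction; the DB lookup happens lazily
        # in _resolve_account_id() (not shown here).
        self.account_code = account_code.strip() if account_code else None

def run_daily_pipeline(pipeline_cfg):
    # The PR threads the configured account code into the repository here.
    return PortfolioRepository("trading", pipeline_cfg.account_code)
```

With the default configuration, the repository ends up scoped to the "chart_based_signal_model" account code.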

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant Pipeline as run_daily_pipeline()
    participant Config as PipelineConfig
    participant Repo as PortfolioRepository
    participant DB as PostgreSQL

    User->>Pipeline: run daily pipeline
    Pipeline->>Config: read trading_config.pipeline.account_code
    Config-->>Pipeline: account_code
    Pipeline->>Repo: PortfolioRepository(db_name, account_code)
    Repo->>Repo: normalize and store account_code
    Note over Repo: on _resolve_account_id(),<br/>look up and cache account_id
    Repo->>DB: SELECT id FROM account_names<br/>WHERE account_code = ?
    DB-->>Repo: account_id (or NULL)
    Repo->>Repo: store in cache
    Pipeline->>Repo: get_current_position(), save_executions_to_db(), ...
    Repo->>DB: apply AND account_id = ? filter to queries
    DB-->>Repo: filtered results
    Repo-->>Pipeline: return data
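The look-up-and-cache step in the diagram can be sketched as a minimal in-memory stand-in. Here fetch_account_id is a hypothetical callable that runs the SELECT against account_names; the real method lives in PortfolioRepository and uses a DB cursor.

```python
class AccountResolver:
    """Caches the account_code -> account_id lookup so each repository
    instance hits account_names at most once."""

    def __init__(self, account_code, fetch_account_id):
        self.account_code = account_code
        self._fetch = fetch_account_id   # e.g. a function running the SELECT
        self._cached = False
        self._account_id = None

    def resolve_account_id(self):
        if not self._cached:
            # SELECT id FROM account_names WHERE account_code = %s
            self._account_id = self._fetch(self.account_code)
            self._cached = True          # cache NULL results too
        return self._account_id
```

Caching NULL results as well avoids re-querying for account codes that have no row yet, which matches the "account_id (or NULL)" branch in the diagram.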

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • discipline24
  • Kosw6

Poem

🐰 An account code threads the database through,
the schema-comparison script dances anew.
The repository now tells accounts apart,
and validation-loss curves grow side by side. 🌱✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The PR title '[AI] [FEAT] 계정 ID 생성' ("Account ID Creation") clearly summarizes the core change: adding the account_id field and implementing the account system.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
AI/backtests/compare_transformer_feature_schemas.py (2)

97-100: Consider logging when enable_op_determinism fails.

Enabling TensorFlow's deterministic ops is optional, but silently swallowing a failure makes reproducibility issues hard to debug.

♻️ Suggested logging addition
     try:
         tf.config.experimental.enable_op_determinism()
-    except Exception:
-        pass
+    except Exception as e:
+        print(f"[Warning] TF op determinism not enabled: {e}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AI/backtests/compare_transformer_feature_schemas.py` around lines 97 - 100,
The try/except around tf.config.experimental.enable_op_determinism() currently
swallows all errors; update it to catch Exception as e and log a warning with
the exception details so failures to enable determinism are visible for
debugging (use the module logger via logging.getLogger(__name__) or the existing
logger in this module, and ensure logging is imported/available). This change
should target the block containing tf.config.experimental.enable_op_determinism
to replace the silent pass with a descriptive warning including the exception
message/trace.
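A logger-based variant of this suggestion might look like the sketch below. The helper name and logger setup are assumptions; the script may already have its own logger.

```python
import logging

logger = logging.getLogger(__name__)

def enable_determinism(tf_module):
    """Try to enable deterministic TF ops; warn instead of failing silently."""
    try:
        tf_module.config.experimental.enable_op_determinism()
    except Exception as e:  # e.g. older TF builds without this API
        logger.warning("TF op determinism not enabled: %s", e)
```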

255-261: The redundant int() casts can be removed.

As static analysis flagged, calling int() on values that are already integers is unnecessary.

♻️ Suggested cleanup
-        min_idx = int(val_loss_series.idxmin())
+        min_idx = val_loss_series.idxmin()
         best_val_loss = _safe_float(val_loss_series.iloc[min_idx])
-        best_epoch = int(history_df.loc[min_idx, "epoch"])
+        best_epoch = history_df.loc[min_idx, "epoch"]
         final_val_loss = _safe_float(val_loss_series.iloc[-1])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AI/backtests/compare_transformer_feature_schemas.py` around lines 255 - 261,
Remove redundant int() casts for fields that are already integers in the dict
assignment (keys "n_samples", "lookback", "horizons", "train_rows",
"epochs_ran"): stop wrapping len(y_class), lookback, len(raw_df), and
len(history_df) with int(), and simplify the horizons construction by removing
the int(h) cast if horizons/h values are already ints (the code building this
dict — the lines assigning "n_samples", "lookback", "horizons", "train_rows",
"epochs_ran" — should return the raw integer values or an identity-mapped
horizons list instead of re-casting to int()).
schema.sql (1)

223-223: Confirm the nullable design of account_id.

account_id is defined as nullable, without a NOT NULL constraint. This appears to be an intentional design for backward compatibility with existing data, and the repository code does handle the account_id IS NULL case.

Once all executions become account-based, consider adding a NOT NULL constraint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@schema.sql` at line 223, The column "account_id" is defined nullable—confirm
this is intentional for backward compatibility with legacy executions and ensure
code handles NULLs; update the schema definition around "account_id" to include
a clarifying SQL comment or inline note referencing account_names.id and the
compatibility rationale, and add a TODO/migration note to plan adding a NOT NULL
constraint once all executions are account-bound; also verify repository logic
that checks for "account_id IS NULL" (search for usages of account_id in
repository code) and adjust tests to cover both NULL and non-NULL scenarios.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 44e1ef06-819a-42f2-bc23-6d06f89c323e

📥 Commits

Reviewing files that changed from the base of the PR and between 5b142de and 262cd04.

📒 Files selected for processing (8)
  • AI/backtests/compare_transformer_feature_schemas.py
  • AI/backtests/out/transformer_refactor_change_analysis_2026-04-03.md
  • AI/backtests/out/transformer_schema_compare_smoke/run_config.json
  • AI/config/trading.default.json
  • AI/config/trading.py
  • AI/libs/database/repository.py
  • AI/pipelines/daily_routine.py
  • schema.sql

Comment on lines +29 to +53
PRE307_DYNAMIC_23 = [
    "log_return",
    "open_ratio",
    "high_ratio",
    "low_ratio",
    "vol_change",
    "ma_5_ratio",
    "ma_20_ratio",
    "ma_60_ratio",
    "rsi",
    "macd_ratio",
    "bb_position",
    "us10y",
    "yield_spread",
    "vix_close",
    "dxy_close",
    "credit_spread_hy",
    "nh_nl_index",
    "ma200_pct",
    "sentiment_score",
    "risk_keyword_cnt",
    "per",
    "pbr",
    "roe",
]
⚠️ Potential issue | 🔴 Critical

The MA feature names in PRE307_DYNAMIC_23 do not match the features actually generated.

The list defines ma_5_ratio, ma_20_ratio, ma_60_ratio (with underscores), but add_technical_indicators() generates ma5_ratio, ma20_ratio, ma60_ratio (without underscores).

The analysis document (transformer_refactor_change_analysis_2026-04-03.md, section 4A) also notes this mismatch. Because DataLoader.create_dataset silently drops missing features, the three MA features are not actually included when the dynamic23 schema runs.

This can skew the schema-comparison results.

🐛 Suggested fix
 PRE307_DYNAMIC_23 = [
     "log_return",
     "open_ratio",
     "high_ratio",
     "low_ratio",
     "vol_change",
-    "ma_5_ratio",
-    "ma_20_ratio",
-    "ma_60_ratio",
+    "ma5_ratio",
+    "ma20_ratio",
+    "ma60_ratio",
     "rsi",
     "macd_ratio",
     "bb_position",
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
PRE307_DYNAMIC_23 = [
    "log_return",
    "open_ratio",
    "high_ratio",
    "low_ratio",
    "vol_change",
    "ma_5_ratio",
    "ma_20_ratio",
    "ma_60_ratio",
    "rsi",
    "macd_ratio",
    "bb_position",
    "us10y",
    "yield_spread",
    "vix_close",
    "dxy_close",
    "credit_spread_hy",
    "nh_nl_index",
    "ma200_pct",
    "sentiment_score",
    "risk_keyword_cnt",
    "per",
    "pbr",
    "roe",
]
PRE307_DYNAMIC_23 = [
    "log_return",
    "open_ratio",
    "high_ratio",
    "low_ratio",
    "vol_change",
    "ma5_ratio",
    "ma20_ratio",
    "ma60_ratio",
    "rsi",
    "macd_ratio",
    "bb_position",
    "us10y",
    "yield_spread",
    "vix_close",
    "dxy_close",
    "credit_spread_hy",
    "nh_nl_index",
    "ma200_pct",
    "sentiment_score",
    "risk_keyword_cnt",
    "per",
    "pbr",
    "roe",
]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AI/backtests/compare_transformer_feature_schemas.py` around lines 29 - 53,
PRE307_DYNAMIC_23 lists MA feature names with underscores (ma_5_ratio,
ma_20_ratio, ma_60_ratio) but add_technical_indicators() actually generates
ma5_ratio, ma20_ratio, ma60_ratio so DataLoader.create_dataset silently drops
those features; update PRE307_DYNAMIC_23 to use the actual generated names
(ma5_ratio, ma20_ratio, ma60_ratio) OR alternatively change
add_technical_indicators() to emit the underscored variants so names match;
ensure the change is consistent with
transformer_refactor_change_analysis_2026-04-03.md and run the dataset creation
(DataLoader.create_dataset) to verify the three MA features are included.
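This class of mismatch can also be caught generically before training rather than fixed name-by-name. Below is a hedged sketch of a fail-fast check; the helper and the column list are illustrative, not part of the actual codebase.

```python
def check_feature_names(requested, available):
    """Return the requested features if all exist; otherwise raise, so a
    schema typo fails loudly instead of silently shrinking the feature set."""
    available_set = set(available)
    missing = [f for f in requested if f not in available_set]
    if missing:
        raise ValueError(f"requested features not generated: {missing}")
    return list(requested)

# add_technical_indicators() emits ma5_ratio etc. (no underscore after 'ma').
generated_columns = ["log_return", "ma5_ratio", "ma20_ratio", "ma60_ratio", "rsi"]
```

Calling this before DataLoader.create_dataset would have surfaced the underscored names immediately instead of producing a silently smaller schema.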

Comment on lines +159 to +161
X_ts, X_ticker, X_sector, y_class, _, info = loader.create_dataset(raw_df, feature_columns=requested_features)
if len(y_class) == 0:
raise ValueError(f"[{schema_name}] dataset is empty after preprocessing.")

🛠️ Refactor suggestion | 🟠 Major

Recommend a warning log for missing features.

DataLoader.create_dataset silently drops requested features that do not exist. In a schema-comparison script, explicitly warning about the difference between requested_features and effective_features improves the trustworthiness of the comparison.

♻️ Suggested warning logging
     X_ts, X_ticker, X_sector, y_class, _, info = loader.create_dataset(raw_df, feature_columns=requested_features)
+    effective_features = info.get("feature_names", [])
+    missing_features = set(requested_features) - set(effective_features)
+    if missing_features:
+        print(f"[{schema_name}] WARNING: {len(missing_features)} requested features not found: {sorted(missing_features)}")
     if len(y_class) == 0:
         raise ValueError(f"[{schema_name}] dataset is empty after preprocessing.")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AI/backtests/compare_transformer_feature_schemas.py` around lines 159 - 161,
The dataset creation silently drops missing features; update the compare logic
after calling loader.create_dataset (the call that returns X_ts, X_ticker,
X_sector, y_class, _, info) to compute the difference between requested_features
and the actual effective_features returned in info (or derive effective_features
from what was used) and emit a warning log including schema_name and the list of
missing features; if all requested features are missing, also surface that via a
clearer exception or a stronger log entry to aid debugging.

"lr_factor": 0.5,
"min_lr": 1e-06,
"verbose": 2,
"output_dir": "C:\\Users\\jaebin\\Documents\\GitHub\\sisc-web\\AI\\backtests\\out\\transformer_schema_compare_smoke"

⚠️ Potential issue | 🟠 Major

A hardcoded local user path makes this unusable for other developers and in CI.

The C:\\Users\\jaebin\\... path is tied to one developer's local environment. If this file is a smoke-test artifact, add it to .gitignore or change the path to a relative one.

🔧 Suggested change to a relative path
-  "output_dir": "C:\\Users\\jaebin\\Documents\\GitHub\\sisc-web\\AI\\backtests\\out\\transformer_schema_compare_smoke"
+  "output_dir": "AI/backtests/out/transformer_schema_compare_smoke"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"output_dir": "C:\\Users\\jaebin\\Documents\\GitHub\\sisc-web\\AI\\backtests\\out\\transformer_schema_compare_smoke"
"output_dir": "AI/backtests/out/transformer_schema_compare_smoke"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AI/backtests/out/transformer_schema_compare_smoke/run_config.json` at line
22, The run config currently hardcodes a local Windows user path in the JSON key
"output_dir", which breaks CI/other developers; update the "output_dir" value to
a relative project path (e.g., under ./out/) or make it configurable via an
environment variable and fallback to a relative path, and ensure the generated
smoke-test output directory is added to .gitignore so local artifacts aren't
committed; locate the "output_dir" key in run_config.json and replace the
absolute path accordingly.
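One portable way to derive output_dir, along the lines of the suggestion: prefer an environment override and fall back to a path relative to the script. The BACKTEST_OUT_DIR variable name is hypothetical.

```python
import os
from pathlib import Path

def resolve_output_dir(run_name):
    """Prefer an env override, else a path relative to this file, so the
    run config never bakes in a developer's home directory."""
    base = os.environ.get("BACKTEST_OUT_DIR")  # hypothetical override
    root = Path(base) if base else Path(__file__).resolve().parent / "out"
    return root / run_name
```

The value written into run_config.json would then be relative to the repo, not to C:\\Users\\....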

Comment on lines +350 to +365
if account_id is None:
    cursor.execute(
        "DELETE FROM public.executions WHERE fill_date >= %s AND run_id LIKE 'daily_%%'",
        (target_date,),
    )
    cursor.execute(
        "DELETE FROM public.xai_reports WHERE date >= %s AND run_id LIKE 'daily_%%'",
        (target_date,),
    )
    cursor.execute("DELETE FROM public.portfolio_positions WHERE date >= %s", (target_date,))
    cursor.execute("DELETE FROM public.portfolio_summary WHERE date >= %s", (target_date,))
else:
    cursor.execute(
        "DELETE FROM public.executions WHERE fill_date >= %s AND run_id LIKE 'daily_%%' AND account_id = %s",
        (target_date, account_id),
    )

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check whether the xai_reports table has an account_id column
rg -n 'xai_reports' schema.sql | head -30

Repository: SISC-IT/sisc-web

Length of output: 583


🏁 Script executed:

# Find schema.sql and show the complete xai_reports table definition
find . -name "schema.sql" -o -name "*schema*" -type f | head -10

Repository: SISC-IT/sisc-web

Length of output: 126


🏁 Script executed:

# Read the schema file to see full table definition
cat -n AI/libs/database/schema.sql | sed -n '163,180p'

Repository: SISC-IT/sisc-web

Length of output: 119


🏁 Script executed:

# Also check portfolio_positions and portfolio_summary table definitions
rg -A 20 'CREATE TABLE.*portfolio_positions' schema.sql

Repository: SISC-IT/sisc-web

Length of output: 1049


🏁 Script executed:

# Check portfolio_summary table definition
rg -A 20 'CREATE TABLE.*portfolio_summary' schema.sql

Repository: SISC-IT/sisc-web

Length of output: 1033


🏁 Script executed:

cat -n schema.sql | sed -n '163,200p'

Repository: SISC-IT/sisc-web

Length of output: 2180


🏁 Script executed:

# Look at the executions table to see if it has account_id
rg -B 5 -A 20 'CREATE TABLE.*executions' schema.sql

Repository: SISC-IT/sisc-web

Length of output: 1468


🏁 Script executed:

# Check if there's an account table or accounts relationship
rg -A 10 'CREATE TABLE.*account' schema.sql

Repository: SISC-IT/sisc-web

Length of output: 657


🏁 Script executed:

# Look at the actual code in repository.py with more context
cat -n AI/libs/database/repository.py | sed -n '340,370p'

Repository: SISC-IT/sisc-web

Length of output: 2130


🏁 Script executed:

# See the complete else block for account_id case
cat -n AI/libs/database/repository.py | sed -n '361,375p'

Repository: SISC-IT/sisc-web

Length of output: 1028


When allow_global_chain_reset=1 with account_id set, some table deletions are skipped

When account_id is specified, only the executions table is deleted; the deletions for xai_reports, portfolio_positions, and portfolio_summary are missing (lines 362-365).

In the current schema:

  • executions: has an account_id column → per-account filtering works with the current code
  • xai_reports, portfolio_positions, portfolio_summary: no account_id column → per-account filtering is not possible

So when account_id is specified, either:

  1. if these tables must also be cleared, add the delete queries (at the risk of deleting global data), or
  2. if per-account isolation is needed, add an account_id column to each table, wire up foreign keys, and apply the filter.

Complete the else block at lines 362-365 so the intended behavior is explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AI/libs/database/repository.py` around lines 350 - 365, The else branch
currently only deletes from executions when account_id is set, omitting
xai_reports, portfolio_positions, and portfolio_summary; update the block around
the cursor.execute calls that reference account_id and target_date so it either
(A) issues DELETEs for xai_reports, portfolio_positions, and portfolio_summary
using the same date filter (without account_id) to perform the intended
global-date removals when allow_global_chain_reset=1 and account_id is provided,
or (B) (preferred for per-account isolation) add account_id columns to those
tables and then update the DELETE statements to include "AND account_id = %s"
and pass account_id (mirroring the executions delete). Modify the code paths
that call cursor.execute (the statements using account_id and target_date)
accordingly so the else block consistently handles those three tables.
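Option (B) in the prompt, extending per-account filtering to all four tables, would look roughly like the sketch below once those tables carry an account_id column. That premise is an assumption: today only executions has the column, so running this against the current schema would fail for the other three tables.

```python
def reset_daily_chain(cursor, target_date, account_id=None):
    """Delete daily-run artifacts from target_date forward, optionally
    scoped to one account. Assumes every table below has account_id,
    which this PR only adds to executions so far."""
    tables = [
        # (table, date column, apply run_id LIKE 'daily_%' filter?)
        ("public.executions", "fill_date", True),
        ("public.xai_reports", "date", True),
        ("public.portfolio_positions", "date", False),
        ("public.portfolio_summary", "date", False),
    ]
    for table, date_col, daily_only in tables:
        sql = f"DELETE FROM {table} WHERE {date_col} >= %s"
        params = [target_date]
        if daily_only:
            sql += " AND run_id LIKE 'daily_%%'"
        if account_id is not None:
            sql += " AND account_id = %s"
            params.append(account_id)
        cursor.execute(sql, tuple(params))
```

Driving all four DELETEs from one table list also keeps the account_id branch from silently diverging from the global branch, which is exactly the bug flagged above.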
